A total of 9,222 query results were found (search time: 15 ms).
991.
Design formalism for collaborative assembly design (Total citations: 1; self-citations: 0; citations by others: 1)
Joints are common in product design because of the limitations of component geometric configurations and material properties, and because of requirements for inspection, accessibility, repair, and portability. Collaborative product design is emerging as a viable alternative to the traditional design process, and collaborative assembly design (AsD) methodologies are needed for distributed product development. Existing AsD methodologies have limitations in capturing the non-geometric aspects of designers' intent on joining and are not efficient in a collaborative design environment. This paper introduces an AsD formalism and associated AsD tools to capture joining relations and the implications of spatial relationships. The formalism allows joining relations to be modeled symbolically for computer interpretation, and the resulting model can be used to infer their mathematical and physical implications. An AsD model generated from the formalism is used to exchange AsD information transparently in a collaborative AsD environment. An assembly relation model and a generic assembly relationship diagram are developed to capture assembly and joining information concisely and persistently. As a demonstration, the developed AsD formalism and tools are applied to a connector assembly with arc-weld and rivet joints.
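The abstract leaves the structure of the assembly relation model unspecified; below is a minimal sketch, assuming a graph-style representation with components as nodes and joining relations as edges. All class and field names are hypothetical illustrations, not the paper's formalism.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class JointType(Enum):
    ARC_WELD = auto()   # joining methods named in the demonstration
    RIVET = auto()

@dataclass
class JoiningRelation:
    """Symbolic, computer-interpretable record of one joint."""
    joint_type: JointType
    components: tuple          # the pair of joined components
    spatial: list = field(default_factory=list)   # implied spatial relations
    intent: str = ""           # non-geometric designer intent on joining

@dataclass
class AssemblyRelationModel:
    """Graph of components (nodes) and joining relations (edges)."""
    components: set = field(default_factory=set)
    joints: list = field(default_factory=list)

    def add_joint(self, joint: JoiningRelation) -> None:
        self.components.update(joint.components)
        self.joints.append(joint)

# Encoding the connector-assembly demonstration in this toy model:
model = AssemblyRelationModel()
model.add_joint(JoiningRelation(JointType.ARC_WELD, ("bracket", "plate"),
                                intent="permanent structural joint"))
model.add_joint(JoiningRelation(JointType.RIVET, ("plate", "cover"),
                                intent="serviceable; allow inspection"))
print(len(model.components), len(model.joints))   # 3 2
```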
992.
993.
A novel multi-agent image interpretation system has been developed that differs markedly from previous approaches, especially in its elaborate high-level, knowledge-based control over low-level image segmentation algorithms. Agents dynamically adapt the segmentation algorithms based on knowledge about global constraints, contextual knowledge, local image information, and personal beliefs. In general, agent control allows the underlying segmentation algorithms to be simpler and to be applied to a wider range of problems with higher reliability. The agent knowledge model is general and modular, supporting the easy construction and addition of agents for any image processing task. Each agent in the system is responsible for one type of high-level object and cooperates with other agents to arrive at a consistent overall image interpretation. Cooperation involves communicating hypotheses and resolving conflicts between the interpretations of individual agents. The system has been applied to IntraVascular UltraSound (IVUS) images, which are segmented by five agents specialized in lumen, vessel, calcified-plaque, shadow, and sidebranch detection. IVUS image sequences from 7 patients were processed, and vessel and lumen contours were detected fully automatically. These were compared with expert-corrected, semiautomatically detected contours. Results show good correlations between agents and expert, with r=0.84 for the lumen and r=0.92 for the vessel cross-sectional areas, respectively.
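As a rough illustration of the cooperation scheme, here is a hypothetical sketch in which agents repeatedly exchange hypotheses and a toy rule resolves conflicts. All names are illustrative stand-ins: real agents would wrap adapted IVUS segmentation algorithms, while hypotheses here are reduced to radius estimates.

```python
class Agent:
    """One agent per high-level object type (lumen, vessel, shadow, ...)."""
    def __init__(self, target, detector):
        self.target = target          # object type this agent is responsible for
        self.detector = detector      # adapted low-level segmentation routine

    def propose(self, image, context):
        # The detector may adapt to other agents' current hypotheses (context).
        return self.detector(image, context)

def resolve(context):
    """Toy conflict resolution: the lumen must lie inside the vessel wall."""
    if "lumen" in context and "vessel" in context:
        context["lumen"] = min(context["lumen"], context["vessel"])
    return context

def interpret(image, agents, rounds=5):
    """Iterate proposal and conflict resolution toward a consistent result."""
    context = {}
    for _ in range(rounds):
        for a in agents:
            context[a.target] = a.propose(image, context)
        context = resolve(context)
    return context

# Stand-in detectors returning fixed radii (real ones would segment images):
agents = [Agent("lumen",  lambda img, ctx: 2.1),
          Agent("vessel", lambda img, ctx: 3.4)]
print(interpret(None, agents))        # {'lumen': 2.1, 'vessel': 3.4}
```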
994.
Recent advances in hardware technology, together with the state of the art in computer vision and artificial intelligence research, can be employed to develop autonomous and distributed monitoring systems. This paper proposes a multi-agent architecture for understanding scene dynamics by merging the information streamed by multiple cameras. A typical application would be the monitoring of a secure site, or any visual surveillance application deploying a network of cameras. Modular software (the agents) within this architecture controls the different components of the system and incrementally builds a model of the scene by merging the information gathered over extended periods of time. The use of distributed artificial intelligence, composed of separate and autonomous modules, is justified by the need for scalable designs capable of cooperating to infer an optimal interpretation of the scene. Decentralizing intelligence yields more robust and reliable sources of interpretation, and also allows easy maintenance and updating of the system. Results are presented to support the choice of a distributed architecture and to show that scene interpretation can be built incrementally and efficiently by modular software.
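A minimal sketch of the incremental scene-model merging described above, assuming a toy representation in which each camera agent streams per-object position observations and a central model fuses them over time; all names are hypothetical.

```python
from collections import defaultdict

class SceneModel:
    """Incrementally accumulates time-stamped observations per tracked object."""
    def __init__(self):
        self.tracks = defaultdict(list)    # object id -> [(t, camera, position)]

    def merge(self, camera_id, t, observations):
        # Each agent reports its local interpretation; the model fuses them
        # into a single multi-camera history per object.
        for obj_id, pos in observations.items():
            self.tracks[obj_id].append((t, camera_id, pos))

scene = SceneModel()
scene.merge("cam_A", t=0, observations={"person_1": (12.0, 3.5)})
scene.merge("cam_B", t=0, observations={"person_1": (12.2, 3.4)})   # second view
scene.merge("cam_A", t=1, observations={"person_1": (12.8, 3.6)})
print(scene.tracks["person_1"])    # fused track built incrementally over time
```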
995.
A general feature extraction framework is proposed as an extension of conventional linear discriminant analysis, and two nonlinear feature extraction algorithms based on this framework are investigated. The first is a kernel function feature extraction (KFFE) algorithm, in which a disturbance term is introduced to regularize the algorithm. Moreover, it is shown that several existing nonlinear feature extraction algorithms are special cases of this KFFE algorithm. The second algorithm, the mean-STD1-norm feature extraction algorithm, is also derived from the framework. Experiments on both synthetic and real data are presented to demonstrate the performance of the two algorithms.
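The abstract does not give the KFFE formulation itself; the NumPy sketch below shows a regularized two-class kernel Fisher discriminant in the same spirit, with the disturbance term appearing as the ridge parameter `mu`. This is an assumption about the algorithm's general shape, not the paper's definition.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fda_direction(X, y, gamma=1.0, mu=1e-3):
    """Two-class kernel Fisher direction, regularized by the ridge mu."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]  # class means in feature space
    N = np.zeros((n, n))                             # within-class scatter
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    # The "disturbance" (regularization) term mu keeps N invertible:
    alpha = np.linalg.solve(N + mu * np.eye(n), m[1] - m[0])
    return alpha, K

# Synthetic two-class data; the extracted feature is f(x) = sum_i alpha_i k(x_i, x).
X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 3])
y = np.array([0] * 20 + [1] * 20)
alpha, K = kernel_fda_direction(X, y)
scores = K @ alpha            # 1-D discriminant feature for the training set
print(scores[:20].mean() < scores[20:].mean())   # classes separate: True
```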
996.
Text present in images and video contains useful information for the automatic annotation, indexing, and structuring of images. Extracting this information involves detection, localization, tracking, extraction, enhancement, and recognition of the text in a given image. However, variations of text due to differences in size, style, orientation, and alignment, as well as low image contrast and complex backgrounds, make automatic text extraction extremely challenging. While comprehensive surveys exist for related problems such as face detection, document analysis, and image and video indexing, the problem of text information extraction is not well surveyed. A large number of techniques have been proposed to address this problem; the purpose of this paper is to classify and review these algorithms, discuss benchmark data and performance evaluation, and point out promising directions for future research.
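The stages enumerated above form a pipeline; the skeleton below makes the sequencing concrete using stub stages. Every function here is an illustrative placeholder rather than a particular algorithm from the survey.

```python
def detect_text(frame):            return True             # is any text present?
def localize_text(frame):          return [(0, 0, 10, 5)]  # candidate bounding boxes
def track_text(frame, regions):    return regions          # follow regions across frames
def extract(frame, region):        return "glyph image"    # crop the text region
def enhance(glyph):                return glyph            # deblur / binarize / upscale
def recognize(glyph):              return "TEXT"           # OCR the enhanced glyphs

def text_information_extraction(frame):
    """Detection -> localization -> tracking -> extraction -> enhancement -> recognition."""
    if not detect_text(frame):
        return []
    regions = track_text(frame, localize_text(frame))
    return [recognize(enhance(extract(frame, r))) for r in regions]

print(text_information_extraction(frame=None))   # ['TEXT']
```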
997.
In this study, a novel segmentation technique is proposed for multispectral satellite image compression. A segmentation decision rule composed of the principal eigenvectors of the image correlation matrix is derived to determine the similarity of the image characteristics of two image blocks. Based on this decision rule, we develop an eigenregion-based segmentation technique that divides the original image into appropriate eigenregions according to their local terrain characteristics. To achieve better compression efficiency, each eigenregion image is then compressed by an efficient compression algorithm, the eigenregion-based eigensubspace transform (ER-EST). The ER-EST combines a 1D eigensubspace transform (EST) with a 2D-DCT to decorrelate the data in the spectral and spatial domains. Before the EST is performed, the dimension of its transformation matrix is estimated by an information criterion; in this way, each eigenregion image may be approximated by lower-dimensional components in the eigensubspace. Simulation tests on SPOT and Landsat TM images demonstrate that the proposed compression scheme is well suited to multispectral satellite images.
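A hypothetical sketch of the two ingredients described above: the eigenvector-based block-similarity decision and the spectral decorrelation of the EST. The cosine threshold `tau` and the fixed dimension `k` are illustrative assumptions; the paper estimates the EST dimension with an information criterion and follows the EST with a 2D-DCT.

```python
import numpy as np

def principal_eigvecs(block, k=2):
    """block: (pixels, bands) spectral samples; returns k principal
    eigenvectors of the band correlation matrix."""
    R = block.T @ block / len(block)
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    return V[:, ::-1][:, :k]                  # top-k, descending

def similar(block_a, block_b, k=2, tau=0.95):
    """Decision-rule sketch: merge blocks into one eigenregion when their
    principal correlation eigenvectors are nearly aligned."""
    Va, Vb = principal_eigvecs(block_a, k), principal_eigvecs(block_b, k)
    return all(abs(Va[:, i] @ Vb[:, i]) > tau for i in range(k))

def est(region, k=2):
    """1-D eigensubspace transform: project each pixel's spectrum onto the
    region's k leading eigenvectors (a spatial 2D-DCT would follow)."""
    return region @ principal_eigvecs(region, k)

bands = 6                                     # e.g. a Landsat-TM-like band count
a = np.random.randn(256, bands)
b = a + 0.01 * np.random.randn(256, bands)    # statistically similar block
print(similar(a, b), est(a).shape)            # True (256, 2)
```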
998.
This paper presents an iterative spectral framework for pairwise clustering and perceptual grouping. Our model is expressed in terms of two sets of parameters. Firstly, there are cluster memberships, which represent the affinity of objects to clusters. Secondly, there is a matrix of link weights for pairs of tokens. We adopt a model in which these two sets of variables are governed by a Bernoulli model. We show how the likelihood function resulting from this model may be maximised with respect to both the elements of the link-weight matrix and the cluster-membership variables. We establish the link between the maximisation of the log-likelihood function and the eigenvectors of the link-weight matrix. This leads us to an algorithm in which we iteratively update the link-weight matrix by repeatedly refining its modal structure. Each iteration of the algorithm is a three-step process. First, we compute a link-weight matrix for each cluster by taking the outer product of the vector of current cluster-membership indicators for that cluster. Second, we extract the leading eigenvector from each modal link-weight matrix. Third, we compute a revised link-weight matrix by taking the sum of the outer products of the leading eigenvectors of the modal link-weight matrices.
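The three-step iteration above is concrete enough to transcribe directly into NumPy. In this sketch only the initialization of the membership indicators and the fixed iteration count are illustrative choices; the paper derives the memberships from its Bernoulli model.

```python
import numpy as np

def iterate_link_weights(W, n_clusters, n_iter=20):
    """Iteratively refine the modal structure of the link-weight matrix W."""
    for _ in range(n_iter):
        # Membership indicators from the leading eigenvectors of W
        # (an illustrative initialization/update choice).
        _, vecs = np.linalg.eigh(W)               # ascending eigenvalues
        S = np.abs(vecs[:, -n_clusters:])
        W_new = np.zeros_like(W)
        for k in range(n_clusters):
            Wk = np.outer(S[:, k], S[:, k])       # step 1: per-cluster link-weight matrix
            _, v = np.linalg.eigh(Wk)
            u = np.abs(v[:, -1])                  # step 2: its leading eigenvector
            W_new += np.outer(u, u)               # step 3: sum of outer products
        W = W_new
    return W, S

# Toy affinity matrix with two clear blocks plus noise:
A = np.kron(np.eye(2), np.ones((3, 3))) + 0.05 * np.random.rand(6, 6)
W, S = iterate_link_weights((A + A.T) / 2, n_clusters=2)
print(S.argmax(axis=1))    # cluster assignment per token, e.g. [1 1 1 0 0 0]
```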
999.
Motion estimation is one of the core issues in the MPEG series of standards. In this correspondence, a novel two-phase Hilbert-scan-based search algorithm for block motion estimation is presented. In the intra-phase, a segmentation of the Hilbert curve is first applied to the current block, and a novel coarse-to-fine data structure is then used to eliminate impossible reference blocks in the search window of the reference frame. In the inter-phase, a new prediction scheme for estimating the initial motion vector of the current block is presented. Experimental results reveal that, compared with the GAPD algorithm, the proposed algorithm achieves better execution time and estimation accuracy. At the same estimation accuracy, the proposed algorithm has better execution time than the FS (full search) algorithm. Compared with the TSS (three-step search) algorithm, the proposed algorithm has better estimation accuracy but worse execution time.
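A hypothetical sketch of the elimination idea: the matching cost is accumulated segment by segment along the scan, and a candidate reference block is discarded as soon as its partial cost exceeds the best cost found so far. For brevity the sketch scans in raster order rather than Hilbert order and omits the inter-phase motion-vector prediction.

```python
import numpy as np

def block_match(cur, ref, top, left, search=4, seg=16):
    """Exhaustive search with segment-wise early elimination of candidates."""
    h, w = cur.shape
    best_cost, best_mv = np.inf, (0, 0)
    scan = cur.reshape(-1)                       # paper: Hilbert-ordered scan
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if not (0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w):
                continue
            cand = ref[y:y + h, x:x + w].reshape(-1)
            cost = 0.0
            for s in range(0, len(scan), seg):   # accumulate SAD per segment
                cost += np.abs(scan[s:s + seg] - cand[s:s + seg]).sum()
                if cost >= best_cost:            # impossible block: eliminate early
                    break
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost

ref = np.random.rand(32, 32)
cur = ref[8:24, 8:24].copy()                     # true motion vector is (0, 0)
print(block_match(cur, ref, top=8, left=8))      # ((0, 0), 0.0)
```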
1000.
The great diffusion of digital cameras and the widespread use of the internet have produced a mass of digital images depicting a huge variety of subjects, generally acquired by unknown imaging systems under unknown lighting conditions. This makes color balancing (the recovery of the color characteristics of the original scene) increasingly difficult. In this paper, we describe a method for detecting and removing a color cast (i.e. a color superimposed on the image by the lighting conditions or by the characteristics of the capturing device) from a digital photo without any a priori knowledge of its semantic content. First, a cast detector using simple image statistics classifies the input image as presenting no cast, an evident cast, an ambiguous cast, a predominant color that must be preserved (such as in underwater images or single-color close-ups), or as unclassifiable. A cast remover, a modified version of the white-balance algorithm, is then applied in cases of evident or ambiguous cast. The method has been tested, with positive results, on a data set of some 750 photos.
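A minimal sketch of the detect-then-correct flow, assuming a gray-world style statistic and remover. The thresholds and the simplified three-way classification are illustrative; the paper's detector also distinguishes intrinsic predominant colors and unclassifiable images.

```python
import numpy as np

def detect_cast(img, evident=0.12, ambiguous=0.05):
    """img: float RGB array in [0, 1]; classify from channel-mean offsets."""
    means = img.reshape(-1, 3).mean(axis=0)
    spread = means.max() - means.min()         # simple image statistic
    if spread > evident:
        return "evident cast"
    if spread > ambiguous:
        return "ambiguous cast"
    return "no cast"

def remove_cast(img):
    """Gray-world style white balance: scale channels to a common mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

img = np.clip(np.random.rand(64, 64, 3) * [1.0, 0.8, 0.7], 0, 1)  # warm cast
label = detect_cast(img)
balanced = remove_cast(img) if label != "no cast" else img
print(label, balanced.reshape(-1, 3).mean(axis=0))   # near-equal channel means
```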